
    Industry sponsorship bias in research findings: a network meta-analysis of LDL cholesterol reduction in randomised trials of statins

    Objective: To explore the risk of industry sponsorship bias in a systematically identified set of placebo controlled and active comparator trials of statins. Design: Systematic review and network meta-analysis. Eligibility: Open label and double blind randomised controlled trials comparing one statin with another at any dose or with control (placebo, diet, or usual care) for adults with, or at risk of developing, cardiovascular disease. Only trials lasting longer than four weeks with more than 50 participants per trial arm were included. Two investigators assessed study eligibility. Data sources: Bibliographic databases and reference lists of relevant articles published between 1 January 1985 and 10 March 2013. Data extraction: One investigator extracted data and another confirmed accuracy. Main outcome measure: Mean absolute change from baseline concentration of low density lipoprotein (LDL) cholesterol. Data synthesis: Study level outcomes from randomised trials were combined using random effects network meta-analyses. Results: We included 183 randomised controlled trials of statins, 103 of which were two-armed or multi-armed active comparator trials. When all of the existing randomised evidence was synthesised in network meta-analyses, there were clear differences in the LDL cholesterol lowering effects of individual statins at different doses; in general, higher doses produced greater reductions in baseline LDL cholesterol. Of a total of 146 industry sponsored trials, 64 were placebo controlled (43.8%); the corresponding number for the 37 non-industry sponsored trials was 16 (43.2%). Of the 35 unique comparisons available in the non-industry sponsored trials, 31 were also available in industry sponsored trials. There were no systematic differences in magnitude between the LDL cholesterol lowering effects of individual statins observed in industry sponsored versus non-industry sponsored trials.
In industry sponsored trials, the mean change from baseline LDL cholesterol level was on average 1.77 mg/dL (95% credible interval −11.12 to 7.66) lower than the change observed in non-industry sponsored trials. There was no detectable inconsistency in the evidence network. Conclusions: Our analysis shows that findings from industry sponsored statin trials are similar in magnitude to those from non-industry sponsored trials. There are real differences in the effectiveness of individual statins at various doses, which explain previously observed discrepancies between industry and non-industry sponsored trials.
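The network meta-analysis above combines direct and indirect evidence across the statin network. As a minimal sketch of the indirect-comparison building block (all numbers below are hypothetical illustrations, not values from this review), the Bucher-style approach forms an A-versus-C estimate from A-versus-placebo and C-versus-placebo results, after inverse-variance pooling of trials sharing a comparison:

```python
import math

def pooled_fixed_effect(estimates, variances):
    """Inverse-variance (fixed-effect) pooling of study-level estimates."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return est, 1.0 / sum(weights)

def indirect_comparison(d_ab, var_ab, d_cb, var_cb):
    """Indirect A-vs-C effect via the common comparator B:
    d_AC = d_AB - d_CB; the variances add (Bucher adjusted indirect comparison)."""
    return d_ab - d_cb, var_ab + var_cb

# Hypothetical mean LDL changes vs placebo (mg/dL) from placebo-controlled trials
d_a, var_a = pooled_fixed_effect([-42.0, -38.0], [8.0, 8.0])  # statin A, 2 trials
d_c, var_c = -30.0, 9.0                                       # statin C, 1 trial

d_ac, var_ac = indirect_comparison(d_a, var_a, d_c, var_c)
print(f"indirect A vs C: {d_ac:.1f} mg/dL (SE {math.sqrt(var_ac):.2f})")
# → indirect A vs C: -10.0 mg/dL (SE 3.61)
```

A full network meta-analysis generalises this idea, fitting all comparisons jointly (here with random effects in a Bayesian model) rather than chaining pairs by hand.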

    Linguistics

    Contains reports on four research projects. National Institute of Mental Health (Grant 1 PO1 MH-13390-04).

    Automated generation of node-splitting models for assessment of inconsistency in network meta-analysis

    Network meta-analysis enables the simultaneous synthesis of a network of clinical trials comparing any number of treatments. Potential inconsistencies between estimates of relative treatment effects are an important concern, and several methods to detect inconsistency have been proposed. This paper is concerned with the node-splitting approach, which is particularly attractive because of its straightforward interpretation, contrasting estimates from both direct and indirect evidence. However, node-splitting analyses are labour-intensive because each comparison of interest requires a separate model. It would be advantageous if node-splitting models could be estimated automatically for all comparisons of interest. We present an unambiguous decision rule to choose which comparisons to split, and prove that it selects only comparisons in potentially inconsistent loops in the network, and that all potentially inconsistent loops in the network are investigated. Moreover, the decision rule circumvents problems with the parameterisation of multi-arm trials, ensuring that model generation is trivial in all cases. Thus, our methods eliminate most of the manual work involved in using the node-splitting approach, enabling the analyst to focus on interpreting the results. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
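The core of node-splitting, contrasting a direct estimate with the indirect estimate from the rest of the network, can be sketched as a simple two-estimate check. The numbers here are hypothetical, and a real node-split refits the full (usually Bayesian) network model for each split rather than using this Bucher-style approximation:

```python
import math

def node_split_check(direct, var_direct, indirect, var_indirect):
    """Contrast direct and indirect estimates of one comparison.
    Returns their difference and an approximate z statistic; |z| > 1.96
    suggests inconsistency at the 5% level for this comparison."""
    diff = direct - indirect
    se = math.sqrt(var_direct + var_indirect)
    return diff, diff / se

# Hypothetical log odds ratios for one comparison in the network
diff, z = node_split_check(direct=0.50, var_direct=0.04,
                           indirect=0.20, var_indirect=0.05)
print(f"difference {diff:.2f}, z = {z:.2f}")
# → difference 0.30, z = 1.00 (no strong evidence of inconsistency)
```

The paper's contribution is automating which comparisons to split and how, so that each such contrast is generated and fitted without manual model-building.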

    Evidence Synthesis for Decision Making 6: Embedding Evidence Synthesis in Probabilistic Cost-effectiveness Analysis

    When multiple parameters are estimated from the same synthesis model, it is likely that correlations will be induced between them. Network meta-analysis (mixed treatment comparisons) is one example where such correlations occur, along with meta-regression and syntheses involving multiple related outcomes. These correlations may affect the uncertainty in incremental net benefit when treatment options are compared in a probabilistic decision model, and it is therefore essential that methods are adopted that propagate the joint parameter uncertainty, including correlation structure, through the cost-effectiveness model. This tutorial paper sets out 4 generic approaches to evidence synthesis that are compatible with probabilistic cost-effectiveness analysis. The first is evidence synthesis by Bayesian posterior estimation and posterior sampling, where other parameters of the cost-effectiveness model can be incorporated into the same software platform. Bayesian Markov chain Monte Carlo simulation methods with WinBUGS software are the most popular choice for this option. A second possibility is to conduct evidence synthesis by Bayesian posterior estimation and then export the posterior samples to another package where other parameters are generated and the cost-effectiveness model is evaluated. Frequentist methods of parameter estimation followed by forward Monte Carlo simulation from the maximum likelihood estimates and their variance-covariance matrix represent a third approach. A fourth option is bootstrap resampling, a frequentist simulation approach to parameter uncertainty. This tutorial paper also provides guidance on how to identify situations in which no correlations exist and therefore simpler approaches can be adopted. Software suitable for transferring data between different packages, and software that provides a user-friendly interface for integrated software platforms, offering investigators a flexible way of examining alternative scenarios, are reviewed.
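The third approach, forward Monte Carlo from estimates and their variance-covariance matrix, can be sketched with NumPy: sample correlated parameters from a multivariate normal and push each draw through the net-benefit calculation, so the correlation is propagated rather than discarded. All numbers here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical estimates and covariance for (QALY gain, incremental cost in GBP);
# the off-diagonal term carries the correlation induced by the synthesis model
mean = np.array([0.30, 1500.0])
cov = np.array([[0.01,     3.0],
                [3.0,  40000.0]])

samples = rng.multivariate_normal(mean, cov, size=10_000)
wtp = 20_000.0                               # willingness to pay per QALY
inb = wtp * samples[:, 0] - samples[:, 1]    # incremental net benefit per draw

print(f"mean INB {inb.mean():.0f}, P(INB > 0) = {(inb > 0).mean():.2f}")
```

Sampling effect and cost independently (zeroing the off-diagonal terms) would give the wrong spread for the incremental net benefit, which is exactly the failure mode the tutorial warns against.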

    Evidence Synthesis for Decision Making 5: The Baseline Natural History Model

    Most cost-effectiveness analyses consist of a baseline model that represents the absolute natural history under a standard treatment in a comparator set and a model for relative treatment effects. We review synthesis issues that arise in the construction of the baseline natural history model. We cover both the absolute response to treatment on the outcome measures on which comparative effectiveness is defined and the other elements of the natural history model, usually “downstream” of the shorter-term effects reported in trials. We recommend that the same framework be used to model the absolute effects of a “standard treatment” or placebo comparator as that used for synthesis of relative treatment effects, and that the baseline model be constructed independently from the model for relative treatment effects, to ensure that the latter are not affected by assumptions made about the baseline. However, simultaneous modeling of baseline and treatment effects could have some advantages when evidence is very sparse or when other research or study designs give strong reasons for believing in a particular baseline model. The predictive distribution, rather than the fixed effect or random effects mean, should be used to represent the baseline to reflect the observed variation in baseline rates. Joint modeling of multiple baseline outcomes based on data from trials or combinations of trial and observational data is recommended where possible, as this is likely to make better use of available evidence, produce more robust results, and ensure that the model is internally coherent.
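The recommendation to use the predictive distribution rather than the random effects mean can be illustrated numerically: the predictive variance for a new setting adds the between-trial heterogeneity τ² to the uncertainty in the pooled mean, giving a wider interval. A minimal sketch with hypothetical values:

```python
import math

def predictive_interval(mu, se_mu, tau, z=1.96):
    """Approximate 95% interval for the baseline in a new trial/setting.
    The predictive variance is se_mu^2 + tau^2, so the interval is wider
    than the interval for the random effects mean alone."""
    se_pred = math.sqrt(se_mu**2 + tau**2)
    return mu - z * se_pred, mu + z * se_pred

# Hypothetical baseline log-odds of an event under standard care:
# pooled mean -1.5 (SE 0.10), between-trial SD tau = 0.30
lo, hi = predictive_interval(-1.5, 0.10, 0.30)
print(f"predictive 95% interval on the log-odds scale: ({lo:.2f}, {hi:.2f})")
# → (-2.12, -0.88), versus roughly (-1.70, -1.30) for the mean itself
```

Using the narrower interval for the mean would understate the plausible variation in baseline rates that the decision model should reflect.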

    Evidence Synthesis for Decision Making 2: A Generalized Linear Modeling Framework for Pairwise and Network Meta-analysis of Randomized Controlled Trials

    We set out a generalized linear model framework for the synthesis of data from randomized controlled trials. A common model is described, taking the form of a linear regression for both fixed and random effects synthesis, which can be implemented with normal, binomial, Poisson, and multinomial data. The familiar logistic model for meta-analysis with binomial data is a generalized linear model with a logit link function, which is appropriate for probability outcomes. The same linear regression framework can be applied to continuous outcomes, rate models, competing risks, or ordered category outcomes by using other link functions, such as identity, log, complementary log-log, and probit link functions. The common core model for the linear predictor can be applied to pairwise meta-analysis, indirect comparisons, synthesis of multiarm trials, and mixed treatment comparisons, also known as network meta-analysis, without distinction. We take a Bayesian approach to estimation and provide WinBUGS program code for a Bayesian analysis using Markov chain Monte Carlo simulation. An advantage of this approach is that it is straightforward to extend to shared parameter models where different randomized controlled trials report outcomes in different formats but from a common underlying model. Use of the generalized linear model framework allows us to present a unified account of how models can be compared using the deviance information criterion and how goodness of fit can be assessed using the residual deviance. The approach is illustrated through a range of worked examples for commonly encountered evidence formats.
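As a toy illustration of the binomial/logit case, here is fixed-effect pairwise pooling on the log odds ratio scale from hypothetical 2×2 tables. Note the paper itself fits these models by MCMC in WinBUGS; simple inverse-variance pooling of log odds ratios is a stand-in used here only to show the scale on which the logit-link model works:

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table
    (treatment arm: a events / b non-events; control arm: c / d)."""
    lor = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d
    return lor, var

# Hypothetical trial-level 2x2 tables: (a, b, c, d)
tables = [(15, 85, 25, 75), (10, 90, 18, 82)]

lors, variances = zip(*(log_odds_ratio(*t) for t in tables))
weights = [1/v for v in variances]
pooled = sum(w * l for w, l in zip(weights, lors)) / sum(weights)

print(f"pooled log OR {pooled:.3f}, OR {math.exp(pooled):.3f}")
```

Swapping the link function (identity, log, cloglog, probit) changes the scale of the linear predictor while the same linear model structure, and the same network extensions, carry over unchanged.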

    Evidence Synthesis for Decision Making 1: Introduction

    We introduce the series of 7 tutorial papers on evidence synthesis methods for decision making, based on the Technical Support Documents in Evidence Synthesis prepared for the National Institute for Health and Clinical Excellence (NICE) Decision Support Unit. Although oriented to NICE’s Technology Appraisal process, which examines new pharmaceutical products in a cost-effectiveness framework, the methods presented throughout the tutorials are equally relevant to clinical guideline development and to comparisons between medical devices or public health interventions. Detailed guidance is given on how to use the other tutorials in the series, which propose a single evidence synthesis framework that covers fixed and random effects models, pairwise meta-analysis, indirect comparisons, and network meta-analysis, and in which outcomes expressed in several different reporting formats can be analyzed without recourse to normal approximations. We describe the principles of evidence synthesis required by the 2008 revision of the NICE Guide to the Methods of Technology Appraisal and explain how the approach proposed in these tutorials was designed to conform to those requirements. We finish with some suggestions on how to present the evidence, the synthesis methods, and the results.